# Multilingual Text Generation

- **Thedrummer Agatha 111B V1 GGUF** · bartowski · 1,169 downloads · 1 like. A quantized version of TheDrummer's Agatha-111B-v1 model, offered in multiple quantization types to suit different environments and requirements. *Tags: Large Language Model*
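Which quantization type to download is mostly a question of memory budget. A minimal sketch of that trade-off in plain Python, using rough community bits-per-weight figures (the `BPW` numbers and function names are illustrative assumptions, not values published for this model):

```python
# Rough bits-per-weight for common llama.cpp quantization types
# (approximate community figures, not exact on-disk sizes).
BPW = {"Q2_K": 3.35, "Q4_K_M": 4.85, "Q5_K_M": 5.69, "Q8_0": 8.50}

def est_size_gb(n_params, quant):
    """Estimated GGUF file size in gigabytes for n_params weights."""
    return n_params * BPW[quant] / 8 / 1e9

def pick_quant(n_params, budget_gb):
    """Highest-precision quant type that fits the budget, else None."""
    fitting = [q for q in BPW if est_size_gb(n_params, q) <= budget_gb]
    return max(fitting, key=BPW.get) if fitting else None

# A 111B-parameter model against a 70 GB memory budget:
print(pick_quant(111e9, 70))  # -> Q4_K_M (~67 GB); Q5_K_M (~79 GB) no longer fits
```

The same arithmetic explains why a 111B model needs an aggressive quant on consumer hardware while an 8B model fits comfortably even at 8 bits.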
- **Cogito 14b Gptq Q4** (Apache-2.0) · mediainbox · 8,547 downloads · 2 likes. A GPTQ-quantized version of the Qwen2.5-14B large language model, supporting English and Spanish text generation tasks. *Tags: Large Language Model, Transformers*
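GPTQ uses calibration data and error-compensating rounding, but the basic idea of 4-bit weight quantization can be shown with a plain symmetric round-to-nearest sketch (a toy illustration, not the GPTQ algorithm itself):

```python
def quantize_4bit(weights):
    """Symmetric round-to-nearest 4-bit quantization of a weight list.
    Returns integer codes in [-8, 7] plus a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

w = [0.12, -0.53, 0.97, -1.4, 0.08]
codes, scale = quantize_4bit(w)
w_hat = dequantize(codes, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
# Round-to-nearest keeps the error within half a quantization step.
assert max_err <= scale / 2 + 1e-12
```

Storing 4-bit codes plus one scale per block is what shrinks a 14B float16 checkpoint by roughly 4x.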
- **Qwen3 8B GGUF** (Apache-2.0) · prithivMLmods · 1,222 downloads · 1 like. Qwen3 is the latest generation of the Tongyi Qianwen large language model series, offering a complete suite of dense and Mixture of Experts (MoE) models; large-scale training brings breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support. *Tags: Large Language Model, English*
- **Qwen3 4B GGUF** (Apache-2.0) · prithivMLmods · 829 downloads · 1 like. Qwen3 is the latest generation of the Tongyi Qianwen large language model series, offering a complete suite of dense and Mixture of Experts (MoE) models; large-scale training brings breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support. *Tags: Large Language Model, English*
- **Qwen3 1.7B GGUF** (Apache-2.0) · prithivMLmods · 357 downloads · 1 like. Qwen3 is the latest generation of the Tongyi Qianwen large language model series, offering a range of dense and Mixture of Experts (MoE) models; large-scale training brings breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support. *Tags: Large Language Model, English*
- **Arshgpt** (MIT) · arshiaafshani · 69 downloads · 5 likes. Transformers is an open-source library developed by Hugging Face, providing various pretrained models for natural language processing tasks. *Tags: Large Language Model, Transformers*
- **Meta Llama 3.1 8B GGUF** · fedric95 · 253 downloads · 3 likes. A GGUF quantized version of Meta-Llama-3.1-8B generated with the llama.cpp tool, supporting multilingual text generation tasks. *Tags: Large Language Model, Supports Multiple Languages*
- **Acip Qwen25 3b** (Apache-2.0) · MerantixMomentum · 31 downloads · 1 like. A compressible version of Qwen2.5-3B provided by the ACIP project, supporting dynamic adjustment of model size while maintaining performance. *Tags: Large Language Model, Transformers, English*
- **Gemma 3 12b It Qat 8bit** (Other) · mlx-community · 149 downloads · 1 like. An 8-bit quantized conversion of Google's Gemma 3 12B model, suitable for image-text-to-text tasks. *Tags: Image-to-Text, Transformers, Other*
- **Salesforce.llama Xlam 2 8b Fc R GGUF** · DevQuasar · 286 downloads · 1 like. A quantized version of Salesforce's 8B-parameter Llama-xLAM-2 model, specialized for text generation tasks. *Tags: Large Language Model*
- **3b Es It Ft Research Release Q4 K M GGUF** (Apache-2.0) · freddyaboulton · 1,052 downloads · 0 likes. A GGUF-format model converted from canopylabs/3b-es_it-ft-research_release, supporting Spanish and Italian. *Tags: Large Language Model, Supports Multiple Languages*
- **Nemotron H 56B Base 8K** (Other) · nvidia · 904 downloads · 26 likes. Nemotron-H-56B-Base-8K is a large language model developed by NVIDIA, featuring a hybrid Mamba-Transformer architecture and supporting an 8K context length and multilingual text generation. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Nemotron H 47B Base 8K** (Other) · nvidia · 1,242 downloads · 16 likes. NVIDIA's Nemotron-H-47B-Base-8K is a large language model designed for text completion, built from a hybrid architecture composed mainly of Mamba-2 and MLP layers with only five attention layers. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Rwkv7 2.9B World GGUF** (Apache-2.0) · Mungert · 748 downloads · 3 likes. An RWKV-7 architecture model with 2.9 billion parameters, supporting multilingual text generation tasks. *Tags: Large Language Model, Supports Multiple Languages*
- **RWKV7 Goose World3 2.9B HF** (Apache-2.0) · RWKV · 132 downloads · 7 likes. A 2.9-billion-parameter RWKV-7 model in the flash-linear-attention format, supporting multilingual text generation tasks. *Tags: Large Language Model, Supports Multiple Languages*
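Linear-attention families such as RWKV replace the growing key-value cache of standard attention with a fixed-size recurrent state. A toy per-channel recurrence conveys the idea (illustrative only; this is not the actual RWKV-7 update rule):

```python
def linear_attention_step(state, k, v, q, decay=0.9):
    """One recurrent step: decay the running key-value accumulator,
    add the new token's k*v terms, then read out with q."""
    new_state = [decay * s + ki * vi for s, ki, vi in zip(state, k, v)]
    out = sum(qi * si for qi, si in zip(q, new_state))
    return new_state, out

# Process a short sequence token by token; memory stays O(channels),
# unlike softmax attention, whose cache grows with sequence length.
state = [0.0, 0.0]
for k, v, q in [([1.0, 0.0], [0.5, 0.2], [1.0, 1.0]),
                ([0.0, 1.0], [0.3, 0.7], [0.5, 0.5])]:
    state, out = linear_attention_step(state, k, v, q)
```

The constant-size state is what lets RWKV-style models generate with flat memory use regardless of context length.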
- **Qwen2.5 14B It Restore** (Apache-2.0) · YOYO-AI · 13 downloads · 2 likes. A merge of Qwen2.5-14B and Qwen2.5-14B-Instruct using the della, ties, and model-stock merge methods, aiming to counter the decline in instruction following and the weakened mathematical ability seen in merged models. *Tags: Large Language Model, Supports Multiple Languages*
- **Rwkv7 1.5B World** (Apache-2.0) · fla-hub · 632 downloads · 9 likes. An RWKV-7 model using a flash linear attention architecture, supporting multilingual text generation tasks. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Teapotllm GGUF** (MIT) · mradermacher · 931 downloads · 5 likes. A quantized version of teapotllm, supporting multilingual text generation tasks. *Tags: Large Language Model, Supports Multiple Languages*
- **Llama 3.2 Taiwan 1B** · lianghsun · 47 downloads · 4 likes. A multilingual text generation model based on Meta's Llama-3.2-1B, with particular support for Chinese (Taiwan) alongside multiple other languages. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Paligemma2 28b Pt 896** · google · 116 downloads · 48 likes. PaliGemma 2 is a vision-language model (VLM) from Google that combines the Gemma 2 language model with the SigLIP vision model, taking image and text inputs and generating text outputs. *Tags: Image-to-Text, Transformers*
- **Paligemma2 10b Pt 896** · google · 233 downloads · 32 likes. PaliGemma 2 is a vision-language model (VLM) from Google that integrates Gemma 2 capabilities, taking image and text inputs and generating text outputs. *Tags: Image-to-Text, Transformers*
- **Paligemma2 10b Pt 448** · google · 282 downloads · 14 likes. PaliGemma 2 is Google's upgraded vision-language model (VLM) combining Gemma 2 capabilities, taking image and text inputs and generating text outputs. *Tags: Image-to-Text, Transformers*
- **Paligemma2 3b Pt 448** · google · 3,412 downloads · 45 likes. A vision-language model based on Gemma 2 that takes image and text inputs and generates text outputs, suitable for a range of vision-language tasks. *Tags: Image-to-Text, Transformers*
- **Llama 3.2 11B Vision** · meta-llama · 31.12k downloads · 511 likes. Llama 3.2-Vision is a series of multimodal large language models from Meta, available at 11B and 90B scales, taking image-plus-text input and producing text output, optimized for visual recognition, image reasoning, image captioning, and visual question answering. *Tags: Image-to-Text, Transformers, Supports Multiple Languages*
- **Magnum V2 12b** (Apache-2.0) · anthracite-org · 18.68k downloads · 89 likes. magnum-v2-12b is the fourth model in the series, fine-tuned from Mistral-Nemo-Base-2407 with the aim of replicating the prose quality of the Claude 3 models (especially Sonnet and Opus). *Tags: Large Language Model, Safetensors, Supports Multiple Languages*
- **Magnum V1 32b Gguf** (Other) · anthracite-org · 68 downloads · 20 likes. A text generation model fine-tuned from Qwen1.5 32B, aiming to reproduce the prose quality of the Claude 3 models. *Tags: Large Language Model, Supports Multiple Languages*
- **Paligemma 3b Ft Ocrvqa 448** · google · 365 downloads · 6 likes. PaliGemma is a versatile lightweight vision-language model (VLM) from Google, built on the SigLIP vision model and the Gemma language model, taking image and text inputs and producing text outputs. *Tags: Image-to-Text, Transformers*
- **Llm Jp 13b V2.0** (Apache-2.0) · llm-jp · 570 downloads · 14 likes. A large-scale language model developed by the Japanese collaborative project LLM-jp, supporting Japanese and English and primarily used for text generation tasks. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Mixtral 8x22B V0.1 GGUF** (Apache-2.0) · MaziyarPanahi · 170.27k downloads · 74 likes. Mixtral 8x22B is a sparse Mixture of Experts model released by Mistral AI (8 experts of 22B each, roughly 141B total parameters since non-expert layers are shared), supporting multilingual text generation tasks. *Tags: Large Language Model, Supports Multiple Languages*
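In a sparse Mixture of Experts layer, a learned router activates only a few experts per token, which is why total and active parameter counts differ. A toy top-2 softmax router in plain Python (an illustration of the general technique, not Mixtral's implementation):

```python
import math

def top2_route(logits):
    """Select the two highest-scoring experts and renormalize their
    softmax weights so the pair's gate weights sum to 1."""
    top2 = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:2]
    exps = [math.exp(logits[i]) for i in top2]
    total = sum(exps)
    return {i: e / total for i, e in zip(top2, exps)}

def moe_layer(x, experts, router_logits):
    """One token through the layer: weighted sum of the chosen experts."""
    gates = top2_route(router_logits)
    return sum(w * experts[i](x) for i, w in gates.items())

# Eight toy "experts", each a simple scalar function of the input.
experts = [lambda x, k=k: (k + 1) * x for k in range(8)]
y = moe_layer(2.0, experts, [0.1, 2.0, -1.0, 0.0, 1.0, 0.3, -0.5, 0.2])
# Only experts 1 and 4 run for this token; the other six stay idle.
```

Because only two of the eight experts execute per token, compute per token scales with the active parameters, not the full checkpoint size.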
- **Tamil Summarization** (Apache-2.0) · suriya7 · 64 downloads · 1 like. A model fine-tuned for Tamil summarization and English-Tamil translation, built with the Hugging Face Transformers library. *Tags: Text Generation, Transformers, Supports Multiple Languages*
- **Mixtral 8x7B V0.1 GGUF** (Apache-2.0) · MaziyarPanahi · 128 downloads · 1 like. A GGUF quantized version of Mixtral-8x7B-v0.1, offered at multiple quantization bit-widths and suitable for text generation tasks. *Tags: Large Language Model, Supports Multiple Languages*
- **Swallow MS 7b V0.1** (Apache-2.0) · tokyotech-llm · 736 downloads · 27 likes. Swallow-MS-7b-v0.1 is a Japanese-enhanced model built by continued pretraining of Mistral-7B-v0.1, developed by TokyoTech-LLM and showing excellent performance on Japanese tasks. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Mixtral 8x7B Instruct V0.1 Offloading Demo** (MIT) · lavawolfiee · 391 downloads · 28 likes. Mixtral is a multilingual text generation model based on a Mixture of Experts (MoE) architecture, supporting English, French, Italian, German, and Spanish. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **14B DPO Alpha** · CausalLM · 172 downloads · 118 likes. CausalLM/14B-DPO-α is a large-scale causal language model supporting Chinese and English text generation, with strong results on the MT-Bench evaluation. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Leo Mistral Hessianai 7b** (Apache-2.0) · LeoLM · 4,435 downloads · 28 likes. A German foundation language model based on Mistral 7b from the LeoLM project, which extends Llama-2 and Mistral models with German capabilities through large-scale continued pretraining on German corpora; the first such model publicly released for commercial use. *Tags: Large Language Model, Transformers, Supports Multiple Languages*
- **Bloom 1b1 Zh** (OpenRAIL) · ckip-joint · 120 downloads · 113 likes. A BLOOM language model enhanced for Traditional Chinese, jointly developed by MediaTek Research, Academia Sinica, and the National Academy for Educational Research. *Tags: Large Language Model, Transformers, Chinese*